Stable Prehensile Pushing: In-Hand Manipulation with Alternating Sticking Contacts
This paper presents an approach to in-hand manipulation planning that
exploits the mechanics of alternating sticking contact. Particularly, we
consider the problem of manipulating a grasped object using external pushes for
which the pusher sticks to the object. Given the physical properties of the
object, frictional coefficients at contacts and a desired regrasp on the
object, we propose a sampling-based planning framework that builds a pushing
strategy concatenating different feasible stable pushes to achieve the desired
regrasp. An efficient dynamics formulation allows us to plan in-hand
manipulations 100-1000 times faster than our previous work, which builds upon a
complementarity formulation. Experimental observations for the generated plans
show that the object precisely moves in the grasp as expected by the planner.
Video Summary -- youtu.be/qOTKRJMx6Ho
Comment: IEEE International Conference on Robotics and Automation 201
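The abstract's planning idea, concatenating individually feasible stable pushes until a desired regrasp is reached, can be illustrated with a toy best-first search. This is a minimal sketch, not the paper's planner: the push primitives, the grasp-pose representation, and the `is_stable` placeholder (which stands in for the paper's sticking-contact dynamics check) are all hypothetical.

```python
import heapq
import math

# Hypothetical discrete push primitives: each moves the object in the grasp
# by a small in-hand displacement (dx, dy, dtheta). The real planner verifies
# each push with a dynamics formulation; `is_stable` is a stand-in that
# always succeeds, purely for illustration.
PRIMITIVES = [(0.01, 0.0, 0.0), (-0.01, 0.0, 0.0),
              (0.0, 0.01, 0.0), (0.0, -0.01, 0.0),
              (0.0, 0.0, 0.1), (0.0, 0.0, -0.1)]

def is_stable(pose, push):
    return True  # placeholder for the sticking-contact stability check

def plan_regrasp(start, goal, tol=0.005, max_iters=10000):
    """Best-first search that concatenates stable pushes to reach `goal`."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) + 0.05 * abs(p[2] - q[2])
    frontier = [(dist(start, goal), start, [])]
    seen = set()
    for _ in range(max_iters):
        if not frontier:
            break
        _, pose, plan = heapq.heappop(frontier)
        if dist(pose, goal) < tol:
            return plan
        key = tuple(round(v, 3) for v in pose)
        if key in seen:
            continue
        seen.add(key)
        for push in PRIMITIVES:
            if is_stable(pose, push):
                nxt = (pose[0] + push[0], pose[1] + push[1], pose[2] + push[2])
                heapq.heappush(frontier, (dist(nxt, goal), nxt, plan + [push]))
    return None

# Plan a small in-grasp displacement to a hypothetical desired regrasp.
plan = plan_regrasp((0.0, 0.0, 0.0), (0.03, -0.02, 0.2))
```

The returned plan is the sequence of pushes whose concatenation moves the object to the target grasp pose; in the paper, each edge of this search is additionally certified stable by the dynamics model.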
Prehensile Pushing: In-hand Manipulation with Push-Primitives
This paper explores the manipulation of a grasped object by pushing it against its environment. Relying on precise arm motions and detailed models of frictional contact, prehensile pushing enables dexterous manipulation with simple manipulators, such as those currently available in industrial settings, and those likely affordable by service and field robots. This paper is concerned with the mechanics of the forceful interaction between a gripper, a grasped object, and its environment. In particular, we describe the quasi-dynamic motion of an object held by a set of point, line, or planar rigid frictional contacts and forced by an external pusher (the environment). Our model predicts the force required by the external pusher to “break” the equilibrium of the grasp and estimates the instantaneous motion of the object in the grasp. It also captures interesting behaviors such as the constraining effect of line or planar contacts and the guiding effect of the pusher’s motion on the object’s motion. We evaluate the algorithm with three primitive prehensile pushing actions—straight sliding, pivoting, and rolling—with the potential to combine into a broader in-hand manipulation capability.
National Science Foundation (U.S.). National Robotics Initiative (Award NSF-IIS-1427050)
Karl Chang Innovation Fund Award
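At a single point contact with Coulomb friction, the sticking-versus-slipping distinction the model relies on reduces to a friction cone test: the contact sticks while the tangential force magnitude stays within the friction coefficient times the normal force. A minimal illustration follows; the function name and forces are hypothetical, and the paper itself handles line and planar contacts and full grasp equilibria, not this toy point-contact case.

```python
def contact_sticks(f_normal, f_tangent, mu):
    """Coulomb friction cone test at a point contact.

    The contact sticks (no relative sliding) while the tangential force
    stays inside the cone |f_tangent| <= mu * f_normal.
    """
    return abs(f_tangent) <= mu * f_normal

# Hypothetical forces in newtons, friction coefficient mu = 0.3.
sticks = contact_sticks(10.0, 2.0, 0.3)       # inside the cone: sticks
slips = not contact_sticks(10.0, 4.0, 0.3)    # outside the cone: slips
```

Breaking grasp equilibrium, as the abstract describes, corresponds to the external pusher driving at least one contact force to the boundary of its cone.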
Pick2Place: Task-aware 6DoF Grasp Estimation via Object-Centric Perspective Affordance
The choice of a grasp plays a critical role in the success of downstream
manipulation tasks. Consider a task of placing an object in a cluttered scene;
the majority of possible grasps may not be suitable for the desired placement.
In this paper, we study the synergy between the picking and placing of an
object in a cluttered scene to develop an algorithm for task-aware grasp
estimation. We present an object-centric action space that encodes the
relationship between the geometry of the placement scene and the object to be
placed in order to provide placement affordance maps directly from perspective
views of the placement scene. This action space enables the computation of a
one-to-one mapping between the placement and picking actions allowing the robot
to generate a diverse set of pick-and-place proposals and to optimize for a
grasp under other task constraints such as robot kinematics and collision
avoidance. With experiments both in simulation and on a real robot we
demonstrate that with our method, the robot is able to successfully complete
the task of placement-aware grasping with over 89% accuracy in such a way that
generalizes to novel objects and scenes.
Comment: IEEE International Conference on Robotics and Automation 202
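The one-to-one mapping between placing and picking actions follows from rigid-body kinematics: once a grasp fixes the gripper pose in the object frame, composing that transform with the object's pose at the pick scene gives the picking action, and composing it with a candidate placement pose gives the corresponding placing action. A planar sketch with hypothetical poses:

```python
import math

def compose(a, b):
    """Compose two planar rigid transforms a∘b, each given as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by, ay + s * bx + c * by, ath + bth)

# Hypothetical example: a grasp pose expressed in the object frame, the
# object's current pose in the pick scene, and a candidate placement pose.
T_grasp_in_obj = (0.05, 0.0, math.pi / 2)
T_obj_pick = (0.2, 0.3, math.pi)
T_obj_place = (1.0, 2.0, 0.0)

# The same in-object grasp transform yields both ends of the mapping.
gripper_at_pick = compose(T_obj_pick, T_grasp_in_obj)
gripper_at_place = compose(T_obj_place, T_grasp_in_obj)
```

Because the mapping is a fixed rigid transform, ranking placements by affordance (as the paper's perspective affordance maps do) immediately ranks the paired grasps, which can then be filtered by kinematics and collision constraints.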
HandNeRF: Learning to Reconstruct Hand-Object Interaction Scene from a Single RGB Image
This paper presents a method to learn hand-object interaction prior for
reconstructing a 3D hand-object scene from a single RGB image. The inference as
well as training-data generation for 3D hand-object scene reconstruction is
challenging due to the depth ambiguity of a single image and occlusions by the
hand and object. We turn this challenge into an opportunity by utilizing the
hand shape to constrain the possible relative configuration of the hand and
object geometry. We design a generalizable implicit function, HandNeRF, that
explicitly encodes the correlation of the 3D hand shape features and 2D object
features to predict the hand and object scene geometry. With experiments on
real-world datasets, we show that HandNeRF is able to reconstruct hand-object
scenes of novel grasp configurations more accurately than comparable methods.
Moreover, we demonstrate that object reconstruction from HandNeRF ensures more
accurate execution of a downstream task, such as grasping for robotic
hand-over.
Comment: 9 pages, 4 tables, 7 figures
RICo: Rotate-Inpaint-Complete for Generalizable Scene Reconstruction
General scene reconstruction refers to the task of estimating the full 3D
geometry and texture of a scene containing previously unseen objects. In many
practical applications such as AR/VR, autonomous navigation, and robotics, only
a single view of the scene may be available, making the scene reconstruction a
very challenging task. In this paper, we present a method for scene
reconstruction by structurally breaking the problem into two steps: rendering
novel views via inpainting and 2D to 3D scene lifting. Specifically, we
leverage the generalization capability of large language models to inpaint the
missing areas of scene color images rendered from different views. Next, we
lift these inpainted images to 3D by predicting normals of the inpainted image
and solving for the missing depth values. By predicting for normals instead of
depth directly, our method allows for robustness to changes in depth
distributions and scale. With rigorous quantitative evaluation, we show that
our method outperforms multiple baselines while providing generalization to
novel objects and scenes.
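The normals-to-depth step can be illustrated in one dimension: a surface normal with components (nx, nz) at a pixel implies a depth slope dz/dx = -nx/nz, so integrating these slopes outward from a pixel with known depth recovers the rest of the profile. The toy function below is a hypothetical 1D reduction; the paper solves the full 2D problem, which is what buys the stated robustness to depth scale and distribution shifts.

```python
import math

def depth_from_normals_1d(normals, z0):
    """Integrate per-pixel normals (nx, nz) into a depth profile, given a
    known boundary depth z0 and unit pixel spacing."""
    depths = [z0]
    for nx, nz in normals:
        slope = -nx / nz                   # implied depth gradient dz/dx
        depths.append(depths[-1] + slope)  # accumulate along the scanline
    return depths

# A 45-degree ramp: every normal points equally along +x and +z.
n = 1 / math.sqrt(2)
profile = depth_from_normals_1d([(n, n)] * 4, z0=5.0)
# depth decreases by 1 per pixel: [5.0, 4.0, 3.0, 2.0, 1.0]
```

Note how the absolute scale enters only through the boundary value z0: the normals fix the shape, which is the intuition behind predicting normals instead of depth directly.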
simPLE: a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects
Existing robotic systems have a clear tension between generality and
precision. Deployed solutions for robotic manipulation tend to fall into the
paradigm of one robot solving a single task, lacking precise generalization,
i.e., the ability to solve many tasks without compromising on precision. This
paper explores solutions for precise and general pick-and-place. In precise
pick-and-place, i.e., kitting, the robot transforms an unstructured arrangement
of objects into an organized arrangement, which can facilitate further
manipulation. We propose simPLE (simulation to Pick Localize and PLacE) as a
solution to precise pick-and-place. simPLE learns to pick, regrasp and place
objects precisely, given only the object CAD model and no prior experience. We
develop three main components: task-aware grasping, visuotactile perception,
and regrasp planning. Task-aware grasping computes affordances of grasps that
are stable, observable, and favorable to placing. The visuotactile perception
model relies on matching real observations against a set of simulated ones
through supervised learning. Finally, we compute the desired robot motion by
solving a shortest path problem on a graph of hand-to-hand regrasps. On a
dual-arm robot equipped with visuotactile sensing, we demonstrate
pick-and-place of 15 diverse objects with simPLE. The objects span a wide range
of shapes and simPLE achieves successful placements into structured
arrangements with 1mm clearance over 90% of the time for 6 objects, and over
80% of the time for 11 objects. Videos are available at
http://mcube.mit.edu/research/simPLE.html
Comment: 33 pages, 6 figures, 2 tables, submitted to Science Robotics
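The regrasp-planning step, solving a shortest path problem on a graph of hand-to-hand regrasps, can be sketched with a standard Dijkstra search. The graph below is entirely hypothetical (node names, edge costs, and structure are invented for illustration): nodes stand for grasp states and edges for hand-to-hand regrasps with associated costs.

```python
import heapq

# Hypothetical regrasp graph: nodes are grasp states, edges are hand-to-hand
# regrasps weighted by cost. The plan runs from the grasp achieved at picking
# to a grasp that admits the desired placement.
graph = {
    "pick_A": [("left_g1", 1.0), ("left_g2", 2.5)],
    "left_g1": [("right_g1", 1.0)],
    "left_g2": [("right_g1", 0.2), ("place_ok", 3.0)],
    "right_g1": [("place_ok", 1.0)],
    "place_ok": [],
}

def shortest_regrasp_path(graph, start, goal):
    """Dijkstra over the regrasp graph; returns (cost, node sequence)."""
    frontier = [(0.0, start, [start])]
    settled = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if settled.get(node, float("inf")) <= cost:
            continue
        settled[node] = cost
        for nxt, w in graph[node]:
            heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, path = shortest_regrasp_path(graph, "pick_A", "place_ok")
```

In the paper's pipeline, the edge costs and the set of admissible goal grasps come from the task-aware grasping and visuotactile perception components; the graph search itself is the standard algorithm shown here.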